Let's vary $\theta$ across the whole image set and check how the accuracy changes.
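Before running the sweep, the role of $\theta$ can be isolated: the weight applied to the input below is $\exp(-\theta \cdot \mathrm{CAM})$, so a larger $\theta$ suppresses high-activation regions more strongly. A minimal NumPy sketch (the helper name `cam_weight` is ours, not from this notebook):

```python
import numpy as np

def cam_weight(cam, theta):
    """Weight map exp(-theta * (CAM - min)): equals 1 where the CAM is at its
    minimum and decays toward 0 where the class activation is strongest."""
    shifted = cam - cam.min()          # shift so the smallest activation is 0
    return np.exp(-theta * shifted)

cam = np.array([[0.0, 1.0],
                [2.0, 4.0]])
mild   = cam_weight(cam, theta=0.1)   # gentle suppression
strong = cam_weight(cam, theta=1.0)   # aggressive suppression
```

Everywhere the weight is at most 1, and the gap between weak and strong suppression grows with the activation value.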

import

import torch 
from fastai.vision.all import *
import cv2
import numpy as np
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

data

path=untar_data(URLs.PETS)/'images'
path
Path('/home/khy/.fastai/data/oxford-iiit-pet/images')
files=get_image_files(path)
def label_func(f):
    if f[0].isupper():
        return 'cat' 
    else: 
        return 'dog' 
dls=ImageDataLoaders.from_name_func(path,files,label_func,item_tfms=Resize(512)) 

learn

lrnr=cnn_learner(dls,resnet34,metrics=error_rate)
lrnr.fine_tune(1)
epoch train_loss valid_loss error_rate time
0 0.173083 0.012212 0.004736 00:36
epoch train_loss valid_loss error_rate time
0 0.045010 0.005182 0.002030 00:44
net1=lrnr.model[0]
net2=lrnr.model[1] 
net2 = torch.nn.Sequential(
    torch.nn.AdaptiveAvgPool2d(output_size=1), 
    torch.nn.Flatten(),
    torch.nn.Linear(512,out_features=2,bias=False))
net=torch.nn.Sequential(net1,net2)
lrnr2=Learner(dls,net,metrics=accuracy) 
lrnr2.fine_tune(10) 
epoch train_loss valid_loss accuracy time
0 0.251735 0.554969 0.775372 01:25
epoch train_loss valid_loss accuracy time
0 0.101095 0.075245 0.972260 01:25
1 0.110497 0.145551 0.935047 01:24
2 0.108137 0.665859 0.706360 01:24
3 0.106231 0.212862 0.939784 01:25
4 0.086872 0.178903 0.941813 01:25
5 0.052827 0.067336 0.976996 01:25
6 0.038159 0.054754 0.981055 01:25
7 0.025814 0.046475 0.985115 01:25
8 0.013226 0.043639 0.984438 01:24
9 0.010679 0.041715 0.985792 01:25
interp = ClassificationInterpretation.from_learner(lrnr2)
interp.plot_confusion_matrix()
interp.print_classification_report()
              precision    recall  f1-score   support

         cat       0.99      0.97      0.98       492
         dog       0.98      0.99      0.99       986

    accuracy                           0.99      1478
   macro avg       0.99      0.98      0.98      1478
weighted avg       0.99      0.99      0.99      1478


$\theta=0.1$

1st

files
(#7390) [Path('/home/khy/.fastai/data/oxford-iiit-pet/images/boxer_128.jpg'),Path('/home/khy/.fastai/data/oxford-iiit-pet/images/Sphynx_142.jpg'),Path('/home/khy/.fastai/data/oxford-iiit-pet/images/British_Shorthair_203.jpg'),Path('/home/khy/.fastai/data/oxford-iiit-pet/images/Ragdoll_142.jpg'),Path('/home/khy/.fastai/data/oxford-iiit-pet/images/Persian_272.jpg'),Path('/home/khy/.fastai/data/oxford-iiit-pet/images/Bombay_200.jpg'),Path('/home/khy/.fastai/data/oxford-iiit-pet/images/shiba_inu_103.jpg'),Path('/home/khy/.fastai/data/oxford-iiit-pet/images/chihuahua_142.jpg'),Path('/home/khy/.fastai/data/oxford-iiit-pet/images/scottish_terrier_156.jpg'),Path('/home/khy/.fastai/data/oxford-iiit-pet/images/basset_hound_163.jpg')...]
fig, ax = plt.subplots(5,5) 
k=0 
for i in range(5):
    for j in range(5): 
        x, = first(dls.test_dl([PILImage.create(get_image_files(path)[k])]))
        # CAM: contract the linear-layer weights with the final conv feature maps
        camimg = torch.einsum('ij,jkl -> ikl', net2[2].weight, net1(x).squeeze())
        a,b = net(x).tolist()[0]
        # softmax over the two logits
        catprob, dogprob = np.exp(a)/ (np.exp(a)+np.exp(b)) ,  np.exp(b)/ (np.exp(a)+np.exp(b)) 
        if catprob>dogprob: 
            # shift the cat CAM to be non-negative, then down-weight by exp(-0.1*CAM)
            test=camimg[0]-torch.min(camimg[0])
            A1=torch.exp(-0.1*test)
            X1=np.array(A1.to("cpu").detach(),dtype=np.float32)
            Y1=torch.Tensor(cv2.resize(X1,(512,512),interpolation=cv2.INTER_LINEAR))
            x1=x.squeeze().to('cpu')*Y1-torch.min(x.squeeze().to('cpu')*Y1)
            (x1*0.35).squeeze().show(ax=ax[i][j])
            ax[i][j].set_title("cat(%s)" % catprob.round(5))
        else: 
            # same weighting, but with the dog CAM
            test=camimg[1]-torch.min(camimg[1])
            A1=torch.exp(-0.1*test)
            X1=np.array(A1.to("cpu").detach(),dtype=np.float32)
            Y1=torch.Tensor(cv2.resize(X1,(512,512),interpolation=cv2.INTER_LINEAR))
            x1=x.squeeze().to('cpu')*Y1-torch.min(x.squeeze().to('cpu')*Y1)
            (x1*0.35).squeeze().show(ax=ax[i][j])
            ax[i][j].set_title("dog(%s)" % dogprob.round(5))
        k=k+1 
fig.set_figwidth(16)            
fig.set_figheight(16)
fig.tight_layout()
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).

💡 Save the MODE images into CAT/DOG folders $\to$ load the images (dls) $\to$ train for 15 epochs (learner) $\to$ check the accuracy $\to$ record the value $\to$ run this 100 times in total and take the average.
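The plan above can be sketched as a loop; `run_once` is a hypothetical stand-in for one full save → load → fine_tune(15) → evaluate cycle, since the real cycle depends on the trained fastai learner:

```python
import numpy as np

def average_accuracy(run_once, n_trials=100):
    """Repeat one save/train/evaluate cycle `n_trials` times and summarize.
    `run_once(trial)` is assumed to rebuild the CAT/DOG folders, create the
    DataLoaders, fine-tune for 15 epochs, and return validation accuracy."""
    accs = [run_once(t) for t in range(n_trials)]
    return float(np.mean(accs)), float(np.std(accs))
```

Reporting the standard deviation alongside the mean makes the 100-run average easier to interpret.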

$\theta=0.01$

x, = first(dls.test_dl([PILImage.create(get_image_files(path)[1])]))
camimg = torch.einsum('ij,jkl -> ikl', net2[2].weight, net1(x).squeeze())
a,b = net(x).tolist()[0]
catprob, dogprob = np.exp(a)/ (np.exp(a)+np.exp(b)) ,  np.exp(b)/ (np.exp(a)+np.exp(b)) 
if catprob>dogprob:
    test=camimg[0]-torch.min(camimg[0])
    A1=torch.exp(-0.01*test)
    X1=np.array(A1.to("cpu").detach(),dtype=np.float32)
    Y1=torch.Tensor(cv2.resize(X1,(512,512),interpolation=cv2.INTER_LINEAR))
    x1=x.squeeze().to('cpu')*Y1-torch.min(x.squeeze().to('cpu')*Y1)
else : 
    test=camimg[1]-torch.min(camimg[1])
    A1=torch.exp(-0.01*test)
    X1=np.array(A1.to("cpu").detach(),dtype=np.float32)
    Y1=torch.Tensor(cv2.resize(X1,(512,512),interpolation=cv2.INTER_LINEAR))
    x1=x.squeeze().to('cpu')*Y1-torch.min(x.squeeze().to('cpu')*Y1)
(x1*0.35).squeeze().show()
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
<AxesSubplot:>
lrnr2.predict(x1)
('cat', tensor(0), tensor([0.9565, 0.0435]))
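A side note on the probability computation: the manual `np.exp(a)/(np.exp(a)+np.exp(b))` matches `torch.softmax`, but it overflows for large logits, while the built-in is numerically stable. A quick check (the logit values here are illustrative):

```python
import numpy as np
import torch

logits = torch.tensor([[3.1, -0.7]])
manual = float(np.exp(3.1) / (np.exp(3.1) + np.exp(-0.7)))
stable = torch.softmax(logits, dim=1)[0, 0].item()

# With extreme logits the manual version degenerates to inf/inf (nan),
# while softmax subtracts the max internally and stays valid.
big = torch.softmax(torch.tensor([[1000.0, 0.0]]), dim=1)[0, 0].item()
```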
??ImageDataLoaders.from_lists
Signature:
ImageDataLoaders.from_lists(
    path,
    fnames,
    labels,
    valid_pct=0.2,
    seed: int = None,
    y_block=None,
    item_tfms=None,
    batch_tfms=None,
    bs=64,
    val_bs=None,
    shuffle=True,
    device=None,
)
Source:   
    @classmethod
    @delegates(DataLoaders.from_dblock)
    def from_lists(cls, path, fnames, labels, valid_pct=0.2, seed:int=None, y_block=None, item_tfms=None, batch_tfms=None,
                   **kwargs):
        "Create from list of `fnames` and `labels` in `path`"
        if y_block is None:
            y_block = MultiCategoryBlock if is_listy(labels[0]) and len(labels[0]) > 1 else (
                RegressionBlock if isinstance(labels[0], float) else CategoryBlock)
        dblock = DataBlock.from_columns(blocks=(ImageBlock, y_block),
                           splitter=RandomSplitter(valid_pct, seed=seed),
                           item_tfms=item_tfms,
                           batch_tfms=batch_tfms)
        return cls.from_dblock(dblock, (fnames, labels), path=path, **kwargs)
File:      ~/anaconda3/envs/bda2021/lib/python3.8/site-packages/fastai/vision/data.py
Type:      method
x1=x1.reshape(1,3,512,512)
net1.to('cpu')
net2.to('cpu')
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_819350/1428301772.py in <module>
----> 1 net1.to('cpu')
      2 net2.to('cpu')

RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

a = []    # create an empty list
 
for i in range(10):
    a.append(0)    # add an element with append
 
print(a)
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
a = []
for i in range(5) : 
    x, = first(dls.test_dl([PILImage.create(get_image_files(path)[i])]))
    camimg = torch.einsum('ij,jkl -> ikl', net2[2].weight, net1(x).squeeze())
    a,b = net(x).tolist()[0]
    catprob, dogprob = np.exp(a)/ (np.exp(a)+np.exp(b)) ,  np.exp(b)/ (np.exp(a)+np.exp(b)) 
    if catprob>dogprob:
        test=camimg[0]-torch.min(camimg[0])
        A1=torch.exp(-0.01*test)
        X1=np.array(A1.to("cpu").detach(),dtype=np.float32)
        Y1=torch.Tensor(cv2.resize(X1,(512,512),interpolation=cv2.INTER_LINEAR))
        x1=x.squeeze().to('cpu')*Y1-torch.min(x.squeeze().to('cpu')*Y1)
        a
    else : 
        test=camimg[1]-torch.min(camimg[1])
        A1=torch.exp(-0.01*test)
        X1=np.array(A1.to("cpu").detach(),dtype=np.float32)
        Y1=torch.Tensor(cv2.resize(X1,(512,512),interpolation=cv2.INTER_LINEAR))
        x1=x.squeeze().to('cpu')*Y1-torch.min(x.squeeze().to('cpu')*Y1)
        a[i]=x1
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_776563/1239748666.py in <module>
     18         Y1=torch.Tensor(cv2.resize(X1,(512,512),interpolation=cv2.INTER_LINEAR))
     19         x1=x.squeeze().to('cpu')*Y1-torch.min(x.squeeze().to('cpu')*Y1)
---> 20         a[i]=x1

TypeError: 'float' object does not support item assignment
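The TypeError comes from name shadowing: `a` starts out as a list but is rebound to a float by `a,b = net(x).tolist()[0]` inside the loop, so `a[i]=x1` fails. A corrected sketch with distinct names (stand-in values replace the model outputs, which require the trained network):

```python
import numpy as np

results = []                                   # dedicated accumulator name
for i in range(5):
    # stand-ins for `net(x).tolist()[0]`; in the notebook these are logits
    logit_cat, logit_dog = float(np.sin(i)), float(np.cos(i))
    catprob = np.exp(logit_cat) / (np.exp(logit_cat) + np.exp(logit_dog))
    # ... compute x1 from the CAM as above ...
    results.append((i, catprob))               # append, don't index a pre-sized list
```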

Drawing on top of an image

import cv2
import numpy as np
import matplotlib.pyplot as plt
from PIL import ImageDraw
from PIL import ImageFont

SAMPLE

get_image_files(path)[0]
Path('/home/khy/.fastai/data/oxford-iiit-pet/images/boxer_128.jpg')
img = PILImage.create(get_image_files(path)[0])
img
x, = first(dls.test_dl([img]))
plt.imshow(x.squeeze().to('cpu')[0], cmap="gray")
plt.show()
img.shape
(333, 500)

Adding a box

(w, h) = (img.shape[0], img.shape[1])
a=(w-512)*0.5
b=(h-512)*0.5
shape = [(a,b),(a+100,b+50)]
img1 = ImageDraw.Draw(img)  
img1.rectangle(shape, fill ="black", outline ="black")
img.show()
<AxesSubplot:>

Adding text

font = ImageFont.truetype("DejaVuSans.ttf", round(h*0.08))
ImageDraw.Draw(img).text((a,b), 'DOG', (255, 255, 255), font=font)
img.show()
<AxesSubplot:>
img = PILImage.create(get_image_files(path)[3000])
img.show()
<AxesSubplot:>
img.shape
(1499, 1999)
x, = first(dls.test_dl([img]))
plt.imshow(x.squeeze().to('cpu')[0], cmap="gray")
plt.show()
(w, h) = (img.shape[0], img.shape[1])
a=(w-512)*0.5
b=(h-512)*0.5
shape = [(a,b),(a+100,b+50)]
#shape = [(30, 30), (130, 80)]
img1 = ImageDraw.Draw(img)  
img1.rectangle(shape, fill ="white", outline ="black")
ImageDraw.Draw(img).text((a, b), 'CAT', (0,0,0), font=font)
img.show()
<AxesSubplot:>

Applying to every image

a=str(list(path.ls())[1]).split('/')[-1]
a.isupper()
False
print(str(list(path.ls())[1]).split('/')[-1][0].isupper())
print(str(list(path.ls())[0]).split('/')[-1][0].isupper())
True
False
str(list(path.ls())[0]).isupper()
False
get_image_files(path)[0]
Path('/home/khy/.fastai/data/oxford-iiit-pet/images/boxer_128.jpg')
dls.vocab
['cat', 'dog']
for i in range(7393) :
    img = PILImage.create(get_image_files(path)[i])
    (w, h) = (img.shape[0], img.shape[1])
    shape = [(0, 0), (w*0.3, h*0.1)]
    font = ImageFont.truetype("DejaVuSans.ttf", round(h*0.08))
    name=str(list(path.ls())[i]).split('/')[-1]
    if name[0].isupper() == True :
        img1 = ImageDraw.Draw(img)
        img1.rectangle(shape, fill ="white", outline ="black")
        ImageDraw.Draw(img).text((5, 0), 'CAT', (0,0,0), font=font)
        img.save("Cat/"+name, 'png')
    else:
        img1 = ImageDraw.Draw(img)
        img1.rectangle(shape, fill ="black", outline ="black")
        ImageDraw.Draw(img).text((5, 0), 'DOG', (255,255,255), font=font)
        img.save("Dog/"+name, 'png')

path=untar_data(URLs.PETS)/'images'
path
Path('/home/khy/.fastai/data/oxford-iiit-pet/images')
files=get_image_files(path)
def label_func(f):
    if f[0].isupper():
        return 'cat' 
    else: 
        return 'dog' 
dls=ImageDataLoaders.from_name_func(path,files,label_func,item_tfms=Resize(512)) 
for i in range(7393) :
    img = PILImage.create(get_image_files(path)[i])
    img = img.resize([512,512], resample=None, box=None, reducing_gap=None)
    (w, h) = (img.shape[0], img.shape[1])
    shape = [(0, 0), (w*0.3, h*0.1)]
    font = ImageFont.truetype("DejaVuSans.ttf", round(h*0.08))
    name=str(list(path.ls())[i]).split('/')[-1]
    if name[0].isupper() == True :
        img1 = ImageDraw.Draw(img)  
        img1.rectangle(shape, fill ="white", outline ="black")
        ImageDraw.Draw(img).text((5, 0), 'CAT', (0,0,0), font=font)
        img.save("pet2/"+name, 'png')
    else: 
        img1 = ImageDraw.Draw(img)  
        img1.rectangle(shape, fill ="black", outline ="black")
        ImageDraw.Draw(img).text((5, 0), 'DOG', (255,255,255), font=font)
        img.save("pet2/"+name, 'png')
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
/tmp/ipykernel_829447/2410470204.py in <module>
      1 for i in range(7393) :
----> 2     img = PILImage.create(get_image_files(path)[i])
      3     img = img.resize([512,512], resample=None, box=None, reducing_gap=None)
      4     (w, h) = (img.shape[0], img.shape[1])
      5     shape = [(0, 0), (w*0.3, h*0.1)]

IndexError: list index out of range

path2=Path('pet2') 
path2.ls()
(#7391) [Path('pet2/boxer_128.jpg'),Path('pet2/Sphynx_142.jpg'),Path('pet2/British_Shorthair_203.jpg'),Path('pet2/Ragdoll_142.jpg'),Path('pet2/Persian_272.jpg'),Path('pet2/Bombay_200.jpg'),Path('pet2/shiba_inu_103.jpg'),Path('pet2/chihuahua_142.jpg'),Path('pet2/scottish_terrier_156.jpg'),Path('pet2/basset_hound_163.jpg')...]
files=get_image_files(path2)
dls2=ImageDataLoaders.from_name_func(path2,files,label_func,item_tfms=Resize(512)) 
dls2= ImageDataLoaders.from_folder( path2, train='pet2', valid_pct=0.2, item_tfms=Resize(512))
lrnr3=cnn_learner(dls2,resnet34,metrics=error_rate)
lrnr3.fine_tune(1)
epoch train_loss valid_loss error_rate time
0 0.336038 0.011622 0.003385 00:36
epoch train_loss valid_loss error_rate time
0 0.000602 0.000042 0.000000 00:44
net3=lrnr3.model[0]
net4=lrnr3.model[1] 
net4 = torch.nn.Sequential(
    torch.nn.AdaptiveAvgPool2d(output_size=1), 
    torch.nn.Flatten(),
    torch.nn.Linear(512,out_features=2,bias=False))
net_new=torch.nn.Sequential(net3,net4)
lrnr4=Learner(dls2,net_new,metrics=accuracy) 
lrnr4.fine_tune(10) 
epoch train_loss valid_loss accuracy time
0 0.059553 4260.406250 0.662830 00:44
epoch train_loss valid_loss accuracy time
0 0.001249 0.000160 1.000000 00:44
1 0.001284 0.000086 1.000000 00:44
2 0.000234 0.000022 1.000000 00:44
3 0.000055 0.000010 1.000000 00:44
4 0.000017 0.000007 1.000000 00:44
5 0.000010 0.000005 1.000000 00:44
6 0.000007 0.000003 1.000000 00:44
7 0.000005 0.000003 1.000000 00:44
8 0.000005 0.000003 1.000000 00:44
9 0.000004 0.000002 1.000000 00:44
interp = ClassificationInterpretation.from_learner(lrnr4)
interp.plot_confusion_matrix()
interp.print_classification_report()
              precision    recall  f1-score   support

         cat       1.00      1.00      1.00       498
         dog       1.00      1.00      1.00       979

    accuracy                           1.00      1477
   macro avg       1.00      1.00      1.00      1477
weighted avg       1.00      1.00      1.00      1477

x, = first(dls2.test_dl([PILImage.create(get_image_files(path2)[3000])]))
camimg = torch.einsum('ij,jkl -> ikl', net4[2].weight, net3(x).squeeze())
x.shape
torch.Size([1, 3, 512, 512])
fig, (ax1,ax2) = plt.subplots(1,2) 
# 
dls2.train.decode((x,))[0].squeeze().show(ax=ax1)
ax1.imshow(camimg[0].to("cpu").detach(),alpha=0.5,extent=(0,512,512,0),interpolation='bilinear',cmap='magma')
#
dls2.train.decode((x,))[0].squeeze().show(ax=ax2)
ax2.imshow(camimg[1].to("cpu").detach(),alpha=0.5,extent=(0,512,512,0),interpolation='bilinear',cmap='magma')
fig.set_figwidth(8)            
fig.set_figheight(8)
fig.tight_layout()
fig, ax = plt.subplots(5,5) 
k=0 
for i in range(5):
    for j in range(5): 
        x, = first(dls2.test_dl([PILImage.create(get_image_files(path2)[k])]))
        camimg = torch.einsum('ij,jkl -> ikl', net4[2].weight, net3(x).squeeze())
        a,b = net_new(x).tolist()[0]
        catprob, dogprob = np.exp(a)/ (np.exp(a)+np.exp(b)) ,  np.exp(b)/ (np.exp(a)+np.exp(b)) 
        if catprob>dogprob: 
            dls2.train.decode((x,))[0].squeeze().show(ax=ax[i][j])
            ax[i][j].imshow(camimg[0].to("cpu").detach(),alpha=0.5,extent=(0,511,511,0),interpolation='bilinear',cmap='magma')
            ax[i][j].set_title("cat(%s)" % catprob.round(5))
        else: 
            dls2.train.decode((x,))[0].squeeze().show(ax=ax[i][j])
            ax[i][j].imshow(camimg[1].to("cpu").detach(),alpha=0.5,extent=(0,511,511,0),interpolation='bilinear',cmap='magma')
            ax[i][j].set_title("dog(%s)" % dogprob.round(5))
        k=k+1 
fig.set_figwidth(16)            
fig.set_figheight(16)
fig.tight_layout()
x, = first(dls2.test_dl([PILImage.create(get_image_files(path2)[1])]))
camimg = torch.einsum('ij,jkl -> ikl', net4[2].weight, net3(x).squeeze())
a,b = net_new(x).tolist()[0]
catprob, dogprob = np.exp(a)/ (np.exp(a)+np.exp(b)) ,  np.exp(b)/ (np.exp(a)+np.exp(b)) 
if catprob>dogprob:
    test=camimg[0]-torch.min(camimg[0])
    A1=torch.exp(-0.2*test)
    X1=np.array(A1.to("cpu").detach(),dtype=np.float32)
    Y1=torch.Tensor(cv2.resize(X1,(512,512),interpolation=cv2.INTER_LINEAR))
    x1=x.squeeze().to('cpu')*Y1-torch.min(x.squeeze().to('cpu')*Y1)
else : 
    test=camimg[1]-torch.min(camimg[1])
    A1=torch.exp(-0.2*test)
    X1=np.array(A1.to("cpu").detach(),dtype=np.float32)
    Y1=torch.Tensor(cv2.resize(X1,(512,512),interpolation=cv2.INTER_LINEAR))
    x1=x.squeeze().to('cpu')*Y1-torch.min(x.squeeze().to('cpu')*Y1)
(x1*0.35).squeeze().show()
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
<AxesSubplot:>